    Event-based Vision meets Deep Learning on Steering Prediction for Self-driving Cars

    Event cameras are bio-inspired vision sensors that naturally capture the dynamics of a scene, filtering out redundant information. This paper presents a deep neural network approach that unlocks the potential of event cameras on a challenging motion-estimation task: predicting a vehicle's steering angle. To make the most of this sensor-algorithm combination, we adapt state-of-the-art convolutional architectures to the output of event sensors and extensively evaluate the performance of our approach on a publicly available large-scale event-camera dataset (~1000 km). We present qualitative and quantitative explanations of why event cameras allow robust steering prediction even in cases where traditional cameras fail, e.g., under challenging illumination conditions and fast motion. Finally, we demonstrate the advantages of leveraging transfer learning from traditional to event-based vision, and show that our approach outperforms state-of-the-art algorithms based on standard cameras.
    Comment: 9 pages, 8 figures, 6 tables. Video: https://youtu.be/_r_bsjkJTH
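
    The sketch below illustrates the kind of sensor-algorithm combination the abstract describes: events are assumed to be accumulated into a two-channel frame (positive/negative polarity counts) and fed to a small residual network that regresses the steering angle. All names, layer sizes, and the input resolution are illustrative assumptions, not the authors' architecture.

```python
# Illustrative sketch (not the authors' code): a small ResNet-style
# regressor mapping accumulated event frames to a steering angle.
# Assumes events were binned into a 2-channel image of polarity counts.
import torch
import torch.nn as nn

class ResBlock(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.conv1 = nn.Conv2d(ch, ch, 3, padding=1)
        self.conv2 = nn.Conv2d(ch, ch, 3, padding=1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        # Residual connection: identity plus two convolutions.
        return self.relu(x + self.conv2(self.relu(self.conv1(x))))

class SteeringNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.stem = nn.Sequential(nn.Conv2d(2, 32, 5, stride=2, padding=2),
                                  nn.ReLU(inplace=True))
        self.body = nn.Sequential(ResBlock(32), nn.MaxPool2d(2),
                                  ResBlock(32), nn.MaxPool2d(2))
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                  nn.Linear(32, 1))

    def forward(self, event_frame):  # (B, 2, H, W) polarity-count frame
        return self.head(self.body(self.stem(event_frame)))  # (B, 1) angle

net = SteeringNet()
angle = net(torch.randn(4, 2, 180, 240))  # DAVIS-like 180x240 resolution
```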

    Novel Multi-Feature Bag-of-Words Descriptor via Subspace Random Projection for Efficient Human-Action Recognition

    Human-action recognition through local spatio-temporal features has been widely applied because of its simplicity and reasonable computational complexity. The most common method to represent such features is the well-known Bag-of-Words approach, which turns a Multiple-Instance Learning problem into a supervised one that can be addressed by a standard classifier. In this paper, a learning framework for human-action recognition that follows this strategy is presented. First, spatio-temporal features are detected. Second, they are described by HOG-HOF descriptors and then represented with a Bag-of-Words approach to create a feature-vector representation. The resulting high-dimensional features are reduced by means of a subspace-random-projection technique that retains almost all the original information. Lastly, the reduced feature vectors are delivered to a classifier called Citation K-Nearest Neighbor, especially adapted to Multiple-Instance Learning frameworks. Excellent results have been obtained, outperforming other state-of-the-art approaches on a public database.
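
    Since the key dimensionality-reduction step here is the subspace random projection, the toy sketch below (assumed details, not the paper's implementation) shows the core idea: multiplying high-dimensional Bag-of-Words histograms by a Gaussian random matrix approximately preserves pairwise distances (Johnson-Lindenstrauss), so a much shorter vector can be handed to the classifier.

```python
# Toy sketch (assumed details, not the paper's implementation): reduce
# high-dimensional Bag-of-Words histograms with a Gaussian random
# projection, which approximately preserves pairwise distances.
import numpy as np

rng = np.random.default_rng(0)
n_videos, vocab_size, reduced_dim = 100, 4000, 256

bow = rng.poisson(0.05, size=(n_videos, vocab_size)).astype(float)  # toy BoW counts
bow /= np.maximum(bow.sum(axis=1, keepdims=True), 1.0)              # L1-normalize

# Entries with variance 1/reduced_dim keep vector norms in expectation.
R = rng.normal(0.0, 1.0 / np.sqrt(reduced_dim), size=(vocab_size, reduced_dim))
reduced = bow @ R                                                   # (100, 256)

# Pairwise distances before/after projection stay close on average.
d_orig = np.linalg.norm(bow[0] - bow[1])
d_proj = np.linalg.norm(reduced[0] - reduced[1])
print(f"original: {d_orig:.4f}, projected: {d_proj:.4f}")
```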

    Human-computer interaction based on visual hand-gesture recognition using volumetric spatiograms of local binary patterns

    A more natural, intuitive, user-friendly, and less intrusive Human–Computer interface for controlling an application by executing hand gestures is presented. For this purpose, a robust vision-based hand-gesture recognition system has been developed, and a new database has been created to test it. The system is divided into three stages: detection, tracking, and recognition. The detection stage searches every frame of a video sequence for potential hand poses using a binary Support Vector Machine classifier with Local Binary Patterns as feature vectors. These detections are fed to a tracker to generate a spatio-temporal trajectory of hand poses. Finally, the recognition stage segments a spatio-temporal volume of data using the obtained trajectories and computes a video descriptor called Volumetric Spatiograms of Local Binary Patterns (VS-LBP), which is delivered to a bank of SVM classifiers to perform the gesture recognition. The VS-LBP, one of the paper's main contributions, is a novel video descriptor that provides much richer spatio-temporal information than other existing approaches in the state of the art, at a manageable computational cost. Excellent results have been obtained, outperforming other state-of-the-art approaches.
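
    To make the detection stage concrete, the sketch below (assumed details; the paper's VS-LBP descriptor is richer) computes the basic 8-neighbour LBP code per pixel and pools it into a histogram of the kind that could feed a binary SVM hand/no-hand classifier.

```python
# Minimal sketch of the building blocks named in the abstract (assumed
# details): per-pixel 8-neighbour LBP codes pooled into a 256-bin
# histogram usable as an SVM feature vector.
import numpy as np

def lbp_histogram(gray):
    """8-neighbour LBP codes of an HxW grayscale patch, as a histogram."""
    c = gray[1:-1, 1:-1]  # center pixels (borders skipped)
    neighbours = [gray[:-2, :-2], gray[:-2, 1:-1], gray[:-2, 2:],
                  gray[1:-1, 2:], gray[2:, 2:], gray[2:, 1:-1],
                  gray[2:, :-2], gray[1:-1, :-2]]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, n in enumerate(neighbours):
        # Set one bit per neighbour that is >= the center pixel.
        code |= (n >= c).astype(np.uint8) << bit
    hist, _ = np.histogram(code, bins=256, range=(0, 256))
    return hist / hist.sum()

patch = np.random.default_rng(0).integers(0, 256, (64, 64)).astype(np.int16)
features = lbp_histogram(patch)  # 256-dim feature vector for the SVM
```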

    Development of a hand-gesture recognition system for human-computer interaction

    The aim of this Master's Thesis is the analysis, design, and development of a robust and reliable Human-Computer Interaction interface based on visual hand-gesture recognition. The implementation of the required functions is oriented to simulating a classical hardware interaction device, the mouse, by recognizing a specific hand-gesture vocabulary in color video sequences. For this purpose, a prototype of a hand-gesture recognition system has been designed and implemented, composed of three stages: detection, tracking, and recognition. This system is based on machine learning methods and pattern recognition techniques, which have been integrated with other image processing approaches to achieve high recognition accuracy at a low computational cost. Regarding pattern recognition techniques, several algorithms and strategies applicable to color images and video sequences have been designed and implemented. These algorithms extract spatial and spatio-temporal features from static and dynamic hand gestures in order to identify them in a robust and reliable way. Finally, a visual database containing the necessary vocabulary of gestures for interacting with the computer has been created.

    DroNet: Learning to Fly by Driving

    Civilian drones are soon expected to be used in a wide variety of tasks, such as aerial surveillance, delivery, or monitoring of existing architectures. Nevertheless, their deployment in urban environments has so far been limited. Indeed, in unstructured and highly dynamic scenarios, drones face numerous challenges to navigate autonomously in a feasible and safe way. In contrast to traditional “map-localize-plan” methods, this letter explores a data-driven approach to cope with the above challenges. To accomplish this, we propose DroNet: a convolutional neural network that can safely drive a drone through the streets of a city. Designed as a fast eight-layer residual network, DroNet produces two outputs for each single input image: a steering angle to keep the drone navigating while avoiding obstacles, and a collision probability to let the UAV recognize dangerous situations and promptly react to them. The challenge, however, is to collect enough data in an unstructured outdoor environment such as a city. Clearly, having an expert pilot provide training trajectories is not an option, given the large amount of data required and, above all, the risk it poses to other vehicles and pedestrians moving in the streets. Therefore, we propose to train a UAV from data collected by cars and bicycles, which, already integrated into the urban environment, do not endanger other vehicles and pedestrians. Although trained on city streets from the viewpoint of urban vehicles, the navigation policy learned by DroNet is highly generalizable. Indeed, it allows a UAV to successfully fly at relatively high altitudes and even in indoor environments, such as parking lots and corridors. To share our findings with the robotics community, we publicly release all our datasets, code, and trained networks.
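
    The two-headed output is the core architectural idea; a simplified sketch is shown below (not the released DroNet code; layer sizes and input resolution are assumptions): a shared convolutional trunk feeds a steering-regression head and a collision-probability head.

```python
# Simplified sketch of a two-headed design like DroNet's (assumed sizes,
# not the released code): one trunk, two outputs per input image.
import torch
import torch.nn as nn

class DroNetSketch(nn.Module):
    def __init__(self):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Conv2d(1, 32, 5, stride=2, padding=2), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.steer = nn.Linear(128, 1)      # regression: steering angle
        self.collision = nn.Linear(128, 1)  # logit: collision probability

    def forward(self, img):                 # grayscale frame (B, 1, H, W)
        f = self.trunk(img)
        return self.steer(f), torch.sigmoid(self.collision(f))

model = DroNetSketch()
steer, p_coll = model(torch.randn(1, 1, 200, 200))
# Training would typically mix a regression loss on the steering output
# with a binary cross-entropy loss on the collision output.
```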

    Workfunction fluctuations in polycrystalline TiN observed with KPFM and their impact on MOSFETs variability

    A more realistic approach to evaluating the impact of polycrystalline metal gates on MOSFET variability is presented. 2D experimental workfunction maps of a polycrystalline TiN layer were obtained by Kelvin Probe Force Microscopy (KPFM) with nanometer resolution. These data were the input of a device simulator, which allowed us to evaluate the effect of workfunction fluctuations on MOSFET performance variability. We have demonstrated that modelling TiN workfunction variability must include not only the different workfunctions of the grains but also the grain boundaries.
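
    As a back-of-the-envelope illustration of the evaluation idea (assumptions mine; the paper uses measured KPFM maps and a full device simulator), the sketch below tiles a synthetic two-grain workfunction map into gate-sized windows and converts the per-gate mean workfunction into a threshold-voltage spread, exploiting the fact that Vth shifts roughly one-to-one with the gate metal workfunction.

```python
# Toy sketch (assumptions mine, not the authors' simulator): tile a 2D
# workfunction map into gate-sized windows and read the per-gate mean
# workfunction as a threshold-voltage shift (1 eV of WF ~ 1 V of Vth).
import numpy as np

rng = np.random.default_rng(0)
# Stand-in for a KPFM map: two grain types with distinct workfunctions (eV).
wf_map = rng.choice([4.6, 4.8], size=(256, 256), p=[0.6, 0.4])

gate = 16  # gate side length in map pixels (one simulated device per window)
tiles = wf_map.reshape(256 // gate, gate, 256 // gate, gate)
per_device_wf = tiles.mean(axis=(1, 3)).ravel()  # mean WF over each gate

dvth = per_device_wf - per_device_wf.mean()      # Vth shift in volts
print(f"sigma(Vth) ~ {dvth.std() * 1e3:.1f} mV across {dvth.size} devices")
```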

    A CAFM and device level study of MIS structures with graphene as interfacial layer for ReRAM applications

    Capacitive Metal-Insulator-Semiconductor structures with graphene as an interfacial layer between the HfO2 dielectric and the top electrode have been fabricated and investigated at device level and at the nanoscale with a Conductive Atomic Force Microscope (CAFM). In particular, their electrical properties and variability have been compared to those of devices without graphene to evaluate their feasibility as ReRAM devices. At device level, we observe that, when graphene is present as an intercalated layer, several resistive switching (RS) cycles can be measured, whereas the standard structures without graphene do not show resistive switching behavior. Nanoscale analysis showed that the graphene layer prevents irreversible microstructural damage of the oxide material during the forming process; graphene thus somehow protects the structure during conductive filament (CF) formation. This protection would explain the observation of RS in the devices with intercalated graphene.

    A neo-Austrian approach to computable equilibrium models: time to build technologies and imperfect markets
